Patent abstract:
The invention relates to a method for encoding an illustration, comprising: inscribing the illustration in grayscale or in color; and encoding a message in the form of a two-dimensional barcode comprising a set of blocks, each block encoding a fragment of said message and comprising a set of (M) rows and (N) columns, each block comprising a set of coding sub-blocks, each sub-block comprising a set of bits. It is essentially characterized in that the encoding step comprises: defining or identifying a set of remarkable points on the source illustration; calculating a set of attributes as a function of certain remarkable points; selecting, among the set of attributes, at least one attribute making it possible to define a digital fingerprint, optionally compressed; optionally signing said digital fingerprint; and recording in the message one of: at least one attribute; the digital fingerprint, optionally compressed; or the signed digital fingerprint, compressed or not.
Publication number: FR3058543A1
Application number: FR1757021
Filing date: 2017-07-24
Publication date: 2018-05-11
Inventors: Marc Pic; Mohammed Amine Ouddan; Hugues Souparis
Applicant: Surys SA
IPC main class: G06K 19/06
Patent description:

FRENCH REPUBLIC
NATIONAL INSTITUTE OF INDUSTRIAL PROPERTY, COURBEVOIE
PATENT INVENTION APPLICATION A1
Publication number: FR 3 058 543. National registration number: 17 57021. Int. Cl.8: G06K 19/06 (2017.01).
Date of filing: 24.07.17. Priority: 09.11.16 FR 1660874; 03.05.17 FR 1753902.
Applicant / Holder: SURYS, simplified joint-stock company. Inventors: PIC Marc, OUDDAN Mohammed Amine, SOUPARIS Hugues.
Date of public availability of the application: 11.05.18, Bulletin 18/19. Agent: INPUT IP.
List of documents cited in the preliminary search report: refer to the end of the present booklet.
METHOD FOR AUTHENTICATING AN ILLUSTRATION
[Front-page figure: specimen identity document "PhotoMetriX by SURYS", with sample data: Identity Number: 54338; Surname: THARIA; Given names: JULIETTE; Sex: F; Date of birth: 09/12/1987; Place of birth: Paris, France; Date of issue: 09/01/2015; Place of issue: Paris; Date of expiry: 01/09/2020]
METHOD FOR AUTHENTICATING AN ILLUSTRATION
FIELD OF THE INVENTION
The present invention relates to the field of authentication of an illustration.
By illustration is meant any non-uniform graphic representation; for example a painting, a drawing, a photograph, etc.
For the sake of brevity, the case where the illustration is a photograph, in particular a portrait, will be mainly described here.
In this context, the present invention finds a particular application in the field of verification of identity documents, typically official identity documents (identity card, passport, driving license, etc.) or unofficial ones (subscription card, etc.), where the illustration is a photograph of the identity document holder.
Indeed, the falsification of identity documents mainly concerns the replacement of the identity photograph. While for a long time this replacement could be fairly crude, it has become more sophisticated in recent years with the use of "morphed" images.
By "morphed" image is meant the image resulting from a morphological transformation or morphing, between the original photograph of the legitimate holder of the identity document and that of a fraudster who wishes to use this identity document.
For a fraudster, the identity document targeted for manipulation is, for example, chosen so that the legitimate holder shares a certain number of morphological features with the fraudster. This morphological resemblance between the legitimate bearer and the fraudster facilitates the work of the counterfeiter, who prints this morphed image on the identity document (keeping the other security elements intact). This makes it possible to bypass a visual, and sometimes even automatic, check while remaining visually compatible with the other security elements of the identity document that echo the photograph, such as a ghost image, an image formed by drilled holes, etc.
The objective of the present invention is therefore to ensure that the illustration, in this case the photograph present on the identity document, is the original, that is to say that it has not been manipulated in one way or another. It is therefore a matter of authenticating the illustration, for example as it should have been on the day of production of the identity document. The invention can also be used to authenticate the holder of the document or the subject of the photograph. In this sense, the present invention deals with photometry and not biometry.
It obviously applies to the field of security, as well as to the field of art.
In the field of security, document US 2015/0332136 is known, which aims to secure an identity photograph by surrounding it with a 2D barcode, the 2D barcode being based on alphanumeric data.
The present invention aims to provide an alternative and more secure solution.
SUMMARY OF THE INVENTION
More specifically, the invention relates, according to a first of its objects, to a method of encoding a source color illustration, comprising steps consisting in:
- inscribe said source illustration in the form of an illustration in grayscale or in color,
- encode a message in the form of a two-dimensional barcode comprising a set of blocks, each block coding a fragment of said message and comprising a set of (M) rows and (N) columns, and each block comprising a set of coding sub-blocks, each sub-block comprising a set of binary elements.
It is essentially characterized in that the encoding step comprises preliminary steps consisting in:
- define or identify a set of remarkable points on the source illustration,
- calculate a set of attributes as a function of at least some of the remarkable points of said set,
- select, from among all of the calculated attributes, a set of at least one attribute making it possible to define a digital fingerprint,
- optionally compress said digital fingerprint,
- optionally sign said digital fingerprint by means of a cryptographic signature, and
- record in the message one of:
- a set of at least one attribute,
- the digital fingerprint,
- the compressed digital fingerprint,
- the signed uncompressed digital fingerprint and
- the digital fingerprint compressed and signed.
In one embodiment, a step is further provided consisting in:
- encode at least some color information from the source illustration in said two-dimensional barcode.
In one embodiment, at least one of the steps is further provided:
- write said two-dimensional barcode on a destination medium;
- fasten said inscribed illustration and said destination medium together;
- make the two-dimensional barcode integral with the inscribed illustration; and
- arrange the two-dimensional barcode in a predefined manner in relation to the inscribed illustration, optionally by framing said inscribed illustration.
Thanks to this characteristic, it is possible, as described later, to scan, for example with a communicating object, all the information useful for verification simultaneously.
In one embodiment, each coding block further comprises a set of non-coding sub-blocks whose position is predefined, the method further comprising, for each coding block, steps consisting in:
- select a predefined set of coding sub-blocks,
- encoding on a set of at least one predefined non-coding sub-block the result of the application of an error correcting code to the values encoded by said predefined set of coding sub-blocks.
Provision may be made for a step consisting in:
- digitize the source illustration if it is not already in digital form;
and either:
- decompose the digitized source illustration into a set of zones, each zone being defined:
- by a primitive, determined by a shape recognition algorithm, a biometric recognition algorithm or a color gradient; or
- by a contour detection algorithm; or
- by application to the source illustration of a static geometric grid having a predetermined shape, optionally that of a face, the grid comprising a set of zones; and
- associate each zone with a respective color or color information;
Or:
- convert the digitized source illustration, possibly resized, into the Hue Saturation Value space comprising a hue component and a saturation component; and
- compress the color information of the source illustration in the hue space and in the saturation space by:
- decomposing said hue component and said saturation component into a set of (k) respective zones, each zone corresponding to a homogeneous set of pixels whose hue, respectively saturation, is uniform; and
- associating with each zone a respective hue (Ti) and saturation (Si) and saving them in a dictionary.
Provision may be made for the step of selecting a set of at least one attribute making it possible to define a digital fingerprint to comprise a step consisting in selecting a number of attributes greater than a threshold value recorded in a memory.
Provision may be made for at least one of the following steps:
- add to the digital fingerprint data extrinsic to the source illustration;
- select a number of attributes greater than a threshold value recorded in a memory from the set of calculated attributes; and
- encode at least certain colors or color information of the source illustration in the form of metadata of said barcode.
Provision may be made for at least one of the steps consisting in, for each block:
- arrange the coding sub-blocks and the non-coding sub-blocks so that:
o at least one line of the block comprises a set of coding sub-blocks adjacent two by two, said set being surrounded by a set of non-coding sub-blocks, and
o at least one column of the block comprises a set of coding sub-blocks adjacent two by two, said set being surrounded by a set of non-coding sub-blocks; and
- arrange the coding sub-blocks and the non-coding sub-blocks so that:
o for at least one column comprising a set of coding sub-blocks, each non-coding sub-block of said column encodes a respective result of the application of a respective error correcting code to the values encoded by said set of coding sub-blocks of said column, and
o for at least one line comprising a set of coding sub-blocks, each non-coding sub-block of said line encodes a respective result of the application of a respective error correcting code to the values encoded by said set of coding sub-blocks of said line.
According to another of its objects, the invention relates to a method of authenticating an illustration inscribed in grayscale or in color, derived from a source illustration in color, comprising steps consisting in:
- digitize, by an optical sensor:
o an illustration inscribed in grayscale or in color, and
o a two-dimensional barcode encoded according to the invention and encoding a message and metadata,
- decode the message of said two-dimensional barcode, and
- compare the decoded message and the inscribed illustration.
We can provide steps consisting in:
- in the inscribed illustration read by the optical sensor:
o define or identify a set of remarkable points on the inscribed illustration read by the optical sensor,
o recalculate a set of attributes as a function of at least some of the remarkable points of said set,
o select, from the set of recalculated attributes, a set of at least one attribute making it possible to define a digital fingerprint,
- compare the value of the difference between the decoded attributes of said two-dimensional barcode and the recalculated attributes with a predetermined threshold value recorded in a memory, and
- optionally verify the cryptographic signature.
It is also possible to provide steps consisting in:
- calculate a reconstructed illustration of the source illustration by extracting the message or the metadata from said two-dimensional barcode and, from said extracted message or said extracted metadata, following one of the sequences of steps A, B, C or D below, consisting in:
A1. identify a set of primitives among said metadata;
A2. apply each primitive to the digitized inscribed illustration to recalculate each zone on the digitized inscribed illustration and colorize each zone with its respective associated color; and
A3. merge said recalculated colored zones and the digitized inscribed illustration; or
B1. define a set of zones of the digitized inscribed illustration by an edge detection algorithm;
B2. establish a one-to-one correspondence between each zone of the source illustration and each zone of the digitized inscribed illustration;
B3. color each zone of the digitized inscribed illustration with the color of the corresponding source illustration zone extracted from the metadata; or
C1. define a set of zones of the digitized inscribed illustration by applying a static geometric grid having a predetermined shape, optionally that of a face, to the digitized inscribed illustration;
C2. establish a one-to-one correspondence between each zone of the source illustration and each zone of the digitized inscribed illustration;
C3. color each zone of the digitized inscribed illustration with the color of the corresponding source illustration zone extracted from the metadata; or
D1. extract from a dictionary the values of the set of classes (Ti) in the hue space and (Si) in the saturation space;
D2. recompose the set of (k) zones of the hue space and of the saturation space by decompressing the two-dimensional barcode;
D3. optionally resize the reconstructed illustration in the saturation space and in the hue space to the dimensions of the digitized inscribed illustration;
D4. reconstruct the value (luminance) component from the digitized inscribed illustration, or from the digitized source illustration;
D5. calculate the hue, saturation and value components of the reconstructed illustration by combining the set of (k) zones of the hue space and of the saturation space recomposed in step D2 with the value component reconstructed in step D4; and
D6. convert the hue, saturation and value components into red, green, blue values; and
- display the reconstructed illustration on a display screen; the method optionally further comprising a filling step consisting in applying a restoration algorithm to the reconstructed illustration prior to the display step.
Finally, according to another of its objects, the invention relates to a computer program comprising program code instructions for the execution of the steps of the method according to the invention, when said program is executed on a computer.
Other characteristics and advantages of the present invention will appear more clearly on reading the following description given by way of illustrative and nonlimiting example and made with reference to the appended figures.
DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates a source illustration within the meaning of the present invention, in this case an identity photograph;
FIG. 2 illustrates an embodiment of a 2D barcode within the meaning of the present invention, surrounding the illustration of FIG. 1 and comprising extrinsic data within the meaning of the present invention;
FIG. 3 illustrates metric attributes within the meaning of the present invention;
FIG. 4 illustrates an embodiment of a block within the meaning of the present invention;
FIG. 5 illustrates the illustration of FIG. 1 after decomposition by primitives into a set of zones, each zone being colored with a respective color;
FIG. 6 illustrates an illustration recalculated and colored from the 2D barcode of FIG. 2;
FIG. 7 illustrates the image resulting from the fusion between the recalculated colored zones of FIG. 6 and the digitized inscribed illustration of FIG. 2;
FIG. 8 schematically illustrates an encoding sequence of the illustration of FIG. 1 into a 2D barcode in which each zone of the illustration is defined by a primitive;
FIG. 9 illustrates the decoding of the 2D barcode of FIG. 8;
FIG. 10 schematically illustrates an encoding sequence of the illustration of FIG. 1 into a 2D barcode in which each zone of the illustration is defined by contour recognition;
FIG. 11 illustrates the decoding of the 2D barcode of FIG. 10;
FIG. 12A and FIG. 12B illustrate the reconstruction of an illustration by decoding the 2D barcode encoded with the original portrait;
FIG. 13A and FIG. 13B illustrate the reconstruction of an illustration by decoding the 2D barcode encoded with the original portrait, but for which the illustrations were swapped after the encoding of the 2D barcode;
FIG. 14A illustrates the illustration of FIG. 1 digitized in the hue space, and FIG. 14B illustrates the illustration of FIG. 1 digitized in the saturation space.
DETAILED DESCRIPTION
A source illustration, in color, is shown in FIG. 1. In this case, the source illustration is an identity photograph, in particular for an identity document.
First, a step is provided consisting in digitizing the source illustration if it is not already in digital form. For simplicity, "source illustration" hereafter denotes, without distinction, the source illustration or its corresponding digital file.
In order to subsequently authenticate the source illustration and verify its integrity, that is to say its non-falsification, an encoding mechanism, described below, is proposed.
Remarkable points
A step is provided consisting in defining, or identifying, a set of remarkable points on the source illustration.
A remarkable point is defined as a point of the source illustration, that is to say a pixel or a set of pixels adjacent two by two, for which the contrast gradient, along a predefined direction and distance, is greater than a predefined threshold value.
For example, a remarkable point is a printed point of an alphanumeric or kanji character. A remarkable point can also be purely graphic, that is to say non-alphanumeric, such as for example a point of the iris of an eye or a fixed point of the support on which the source illustration is printed, for example a character of a serial number.
For a photograph of a person, remarkable points can be, for example, the usual elements of biometrics such as the eyes, the nose, the commissure of the lips, the center of the mouth, etc.
More generally, remarkable points can be graphic elements located in an environment with specific physical or mathematical characteristics (in image-processing terms), such as graphic elements around which an intense gradient forms, or which meet image-processing criteria such as Harris-Stephens corner detectors. By "intense" gradient is meant a gradient whose value is greater than a threshold value.
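By way of a non-authoritative sketch (the text does not impose a particular detector), such points could be extracted with OpenCV's Shi-Tomasi variant of the Harris-Stephens detector; the max_points and quality_level parameters below are assumptions:

import cv2

def remarkable_points(illustration_path, max_points=100, quality_level=0.05):
    """Candidate remarkable points: pixels whose local gradient response is strong."""
    gray = cv2.imread(illustration_path, cv2.IMREAD_GRAYSCALE)
    # quality_level plays the role of the "intense gradient" threshold: corners whose
    # response falls below quality_level * strongest_response are rejected.
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=max_points,
                                      qualityLevel=quality_level, minDistance=5)
    return [] if corners is None else [tuple(map(float, pt)) for pt in corners.reshape(-1, 2)]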
Attributes
From the set of remarkable points, a step is provided consisting in calculating a set of attributes which contribute to the identifiable, even unique, character of the source illustration. The attributes include a set of metrics, that is to say a set of distances or angles between certain remarkable points.
The attributes include, for example, the coordinates of remarkable points (relative to a predetermined reference point), the distances between certain remarkable points, values of contrast gradient around the remarkable points, etc.
The position of a predetermined remarkable point of the source illustration in its representation context (for example in the frame presented in FIG. 2), whether this position is random or imposed, can also be an attribute. For example, a remarkable point can be:
- an edge of the source illustration,
- a reference point calculated biometrically, such as the center of gravity of the eyes or the mouth.
For a photograph of a person, attributes can be for example usual elements of biometrics such as distance ratios between the positions of the eyes, the nose, the commissure of the lips or the center of the mouth, or angles between these same elements, etc. as illustrated in figure 3.
Attributes can be calculated using standard biometric software for portrait illustrations.
For example, for a source illustration representing a portrait, the attributes can be calculated based on the morphology of the face (position of the eyes, nose and mouth) and on the orientation of the head at the time the photograph was taken (head straight, slightly tilted to the left, slightly tilted to the right, etc.).
It is also possible to use the algorithm known as SIFT, for "Scale Invariant Feature Transform", or the algorithm known as SURF, for "Speeded Up Robust Features", both of which are local descriptors which consist, first, in detecting a certain number of remarkable points in the image, and then in calculating a descriptor locally describing the image around each remarkable point. The quality of the descriptor is measured by its robustness to the changes an image can undergo, for example a change of scale and a rotation.
For the SIFT algorithm, described in particular in the publication D. Lowe, "Object recognition from local scale-invariant features", IEEE International Conference on Computer Vision, pages 1150-1157, 1999, the detection of points is based on the differences of Gaussians (DoG) obtained by calculating the difference between each pair of images smoothed by a Gaussian filter, varying each time the sigma parameter (i.e. the standard deviation) of the filter. DoGs can be calculated for different scale levels, which introduces the concept of scale space. The detection of potential areas of points of interest / remarkable points is carried out by searching for the extrema in the plane of the image dimensions (x, y) and along the scale-factor axis. A filtering step is then necessary to remove the irrelevant points, for example by eliminating the points whose contrast is too weak.
The SIFT descriptor is calculated on an area around each point of interest, for example of 16x16 pixels, subdivided into 4x4 zones of 4x4 pixels each. On each of the 16 zones, a histogram of the gradient orientations over 8 bins is then calculated. The concatenation of the 16 histograms gives a descriptor vector of 128 values.
For the SURF algorithm, described in particular in the publication H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: Speeded up robust features", European Conference on Computer Vision, pages 404-417, 2006, the method consists in using the determinant of the Hessian matrix, calculating an approximation of the second Gaussian derivatives of the image by means of filters at different scales using masks of different sizes (e.g. 9x9, 15x15, 21x21, ...). For the calculation of the orientation of the points and of the descriptors around the points, the principle is based on the sums of the responses of horizontal and vertical Haar wavelets, as well as their norms. The circular description area is further divided into 16 regions. A wavelet analysis is performed on each region in order to construct the final descriptor. The latter is made up of the sum of the gradients in x and in y, as well as the sum of their respective norms, for all 16 regions. The descriptor vector is thus made up of 64 values which represent properties extracted both in ordinary space and in scale space.
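As a minimal sketch of how these descriptors could be obtained in practice (OpenCV is one possible implementation, not one prescribed by the text; SURF is patent-encumbered, so SIFT is shown):

import cv2

def sift_descriptors(illustration_path):
    """Detect remarkable points and compute their 128-value SIFT descriptors."""
    gray = cv2.imread(illustration_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()  # available in opencv-python >= 4.4
    # keypoints: DoG extrema filtered for weak contrast;
    # descriptors: one vector of 128 values (16 zones x 8 orientation bins) per point.
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    return keypoints, descriptors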
Preferably, a step is provided consisting in ranking the attributes in an order of priority of plausibility, which makes it possible to select only the most effective ones for detecting possible manipulation of the source illustration.
For example, the distance between the two eyes of an adult human being is on average 63 mm, and generally between 58 mm and 72 mm. For a source illustration representing a portrait, if an attribute indicates that the distance between the two eyes is greater than a predetermined value, in this case 8 cm, or less than another predetermined value, in this case 5 cm, provision can be made for this attribute to be rejected (not selected).
We can therefore provide a step consisting in selecting all or part of the calculated attributes.
Preferably, provision is made to select a number of attributes greater than a threshold value recorded in a memory. The more numerous the metrics (number of attributes) and the more they differ from each other, the lower the confusion rate.
Footprint
The set of selected attributes defines a digital fingerprint of the source illustration.
Once the attributes have been selected, the digital fingerprint can then be saved in a memory. In this case, the digital fingerprint is saved as a data vector in temporary memory. Typically, the data vector includes the values of the selected attributes, juxtaposed two by two.
A step can also be provided consisting in adding to the digital fingerprint data extrinsic to the source illustration, in particular data intrinsic to the environment of which the source illustration is an integral part.
For example, for a source illustration such as a photograph in an environment such as a document, in particular an identity document, one can provide at least one of the data sets among:
- data relating to the bearer of said document and entered in said document, for example the surname, first name, height, date of birth of the bearer, etc., which can facilitate document checks,
- data relating to said document, for example information useful for the use of said document (date of validity, perimeter of use, ...) preferably the authenticity of which is proven by the cryptographic signature described later,
- metadata relating to the source illustration, for example metadata relating to the colors of the source illustration, as described later;
- metadata relating to the document, for example:
o a classification, data from external databases, conditions of use, etc. ;
o a "payload" such as a fingerprint, an iris fingerprint, etc. the document holder, represented in the form of a code of details; or
- the date of creation of the source illustration or creation of the 2D barcode described below.
By convention, "fingerprint" therefore denotes either the set of selected attributes alone, or the set of selected attributes to which data extrinsic to the source illustration are added.
Compression
The digital fingerprint is then preferably compressed so as to represent only a few bytes of information.
Signature
The digital fingerprint, possibly compressed, is then signed by means of a cryptographic signature which makes it possible to prove that all of this information was issued by a trusted source.
Advantageously, provision is made to sign by means of a public-key cryptographic signature, in particular a compact one, preferably using elliptic-curve cryptography, and for example according to the ECDSA (Elliptic Curve Digital Signature Algorithm) algorithm.
This signature exploits the asymmetry between the private key and the public key and makes it possible to securely sign the digital fingerprint, while ensuring:
- on the one hand, that nobody in possession of the certificate is able to re-create said signature of the digital fingerprint (and thus make it appear that the content comes from a trusted source when it does not); and
- on the other hand, that everyone can verify the authenticity of the digital fingerprint and the identity of its signatory, using a secure key provided for example in an application on a communicating object (telephone, smartphone, tablet, laptop, etc.) equipped with an optical lens and a display screen, or in ad hoc software.
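As a sketch of the chain fingerprint, optional compression, then signature, under the assumption that zlib provides the compression and that the NIST P-256 curve is used (the text only requires a compact elliptic-curve signature):

import struct
import zlib
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def make_signed_fingerprint(attributes, private_key):
    """attributes: values of the selected attributes, juxtaposed into a data vector."""
    fingerprint = struct.pack(f"<{len(attributes)}f", *attributes)
    compressed = zlib.compress(fingerprint)  # only a few bytes of payload remain
    signature = private_key.sign(compressed, ec.ECDSA(hashes.SHA256()))
    return compressed, signature

# Usage: the verifier only needs the public key of the trusted source.
private_key = ec.generate_private_key(ec.SECP256R1())
payload, sig = make_signed_fingerprint([63.0, 42.5, 12.8], private_key)
private_key.public_key().verify(sig, payload, ec.ECDSA(hashes.SHA256()))  # raises if forged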
2D barcode
A step is provided consisting in encoding a message in the form of a two-dimensional barcode, called a 2D barcode, represented by pixels.
The message includes one of:
- a set of at least one attribute,
- the digital fingerprint,
- the compressed digital fingerprint,
- the signed uncompressed digital fingerprint, and
- the signed compressed digital fingerprint.
It can also be provided that the message further comprises:
- a set of at least one remarkable point from which the attributes are calculated, which typically makes it possible to calculate the fingerprint on only part of a source illustration.
The 2D barcode is then made integral with the source illustration, for example by printing on the support of the source illustration, and in particular on a page of an identity document. The 2D barcode can also be printed as a label stuck to the support of the source illustration. Other techniques can be implemented, for example by engraving or other, as long as the 2D barcode can be recognized optically.
It is provided that the 2D barcode is arranged in a predefined manner with respect to the source illustration, that is to say that its shape is predefined, its dimensions are predefined, and the relative position between the 2D barcode and the source illustration is also predefined.
In the field of art, if the source illustration is a painting, provision can be made for the 2D barcode to be printed on the support of the painting, for example a canvas, and preferably hidden by the frame of the painting; if the source illustration is a sculpture, provision can be made for the 2D barcode to be printed or engraved on the base of the latter.
Provision can be made for the 2D barcode to frame the source illustration; in this case the 2D barcode has the shape of a polygonal frame, and more particularly a rectangular frame, as illustrated in FIG. 2, which is advantageous in the field of security.
Preferably, the relative position of the illustration and of the 2D barcode surrounding it includes a random positional offset, and this relative position is an attribute, which makes it possible to further secure the digital fingerprint. Indeed, two identical illustrations (or the same source illustration) generate first and second almost identical 2D barcodes. But thanks to the random positional offset, the relative position of the first 2D barcode and the relative position of the second 2D barcode are different. In particular, the positional offset is a deliberately introduced random value and not a mechanical variation due, for example, to manufacturing.
Thus, in the field of security, it may happen that an identity document is lost and that the bearer of said lost document has a new identity document made with the same photograph as that used for the lost identity document. In this case, since the photograph on the new identity document is not in exactly the same position as on the lost identity document, the corresponding attribute of the new document is different from the corresponding attribute of the lost identity document. It is thus possible to distinguish two (otherwise identical) versions of the same document.
Typically, the 2D barcode is built within a set of landmarks. These landmarks make it possible to straighten, by image processing, both the source illustration and the 2D barcode. The number of landmarks can be adapted according to the target surface on which the source illustration is affixed / printed / inscribed / glued / displayed / etc. Indeed, the target surface can be flat but also cylindrical, conical, frustoconical, etc. The elements to be straightened are included within the landmarks to ensure their optimal straightening.
For a flat target surface, as illustrated in FIG. 2, preferably three or four landmarks are provided, which surround the source illustration to be secured.
The encoding makes it possible to inscribe, in the immediate vicinity of the source illustration and in coded form, security elements which allow easy verification by means of a communicating object or any camera (including a webcam).
The immediate proximity of the source illustration and the 2D barcode ensures a certain security, in that the alteration (intentional or not) of the source illustration risks damaging the 2D barcode, and vice versa. In addition, it allows the source illustration and the 2D barcode to be read simultaneously by an optical sensor.
Redundancy encoding
The 2D barcode includes a number of message redundancy properties to avoid later reading difficulties. One possible implementation is the use of a correction code, typically one of the codes among:
- a Hamming code,
- a Golay code,
- a Reed-Muller code,
- a Goppa code,
- a Xing code, and
- a Reed-Solomon code.
An example of a 2D barcode encoding method is to create a plurality of blocks of M rows x N columns of binary elements each, with M and N natural integers both greater than or equal to 3.
Preferably M = N so as to obtain square blocks.
Depending on the length of the message, said message can be split into fragments, each fragment being encoded on a respective block. For brevity, message and fragment(s) will be treated as equivalent below.
The blocks can be distributed in various shapes adapted to the support used. For example, the blocks can be distributed in the background of a photograph or form a particular pattern. The only constraint is that they remain inside the surface covered by the landmarks or in their immediate vicinity, in order to allow a good recovery.
In a coding block, a fragment is encoded on a set of binary elements called "coders" whose position is known and predetermined.
It is proposed here that each coding block contains, in addition to the coding binary elements, a set of non-coding binary elements, different from the coding binary elements, and the position of which is also known and predetermined.
In this case, it is provided that each M x N block is organized into:
- a set of coding sub-blocks, each coding sub-block comprising a set of Mc x Nc coding binary elements, with Mc and Nc two natural integers such that Mc < M and Nc < N, in particular Mc = M/2 and Nc = N/2, and
- a set of non-coding sub-blocks, each non-coding sub-block comprising a set of Mn x Nn non-coding binary elements, with Mn and Nn two natural integers such that Mn < M and Nn < N, in particular Mn = M/2 and Nn = N/2.
According to the invention, each block therefore contains a set of sub-blocks of coding binary elements, and a set of sub-blocks of non-coding binary elements, the position of each sub-block being known and predetermined.
Preferably, if M = N, it is then provided that Mc = Nc and Mn = Nn, so as to also obtain square sub-blocks.
Preferably, Mc = Mn and Nc = Nn, so that the coding sub-blocks have the same size as the non-coding sub-blocks.
For example, as illustrated in FIG. 4, Mc = Nc = Mn = Nn = 2. Each block is therefore organized into sub-blocks of 2 x 2 coding or non-coding binary elements each, illustrated in bold lines in FIG. 4, each encoding 4 bits, i.e. 2^4 = 16 values.
It is provided that at least some of the non-coding sub-blocks of a given block implement an error correcting code, in this case a Reed-Solomon code, on the data encoded by at least some of the coding sub-blocks.
It can be provided that:
- at least one of the non-coding sub-blocks is a synchronization sub-block used to resynchronize the block by means of a landmark, for example a conventional code such as (1, 0, 0, 1), as illustrated in FIG. 4 for the sub-block positioned at the top left of the block,
- at least two of the non-coding sub-blocks are securing sub-blocks, preferably arranged diametrically opposite, which makes it possible to secure a central diagonal of the block by means of an error correcting code, in this case a Reed-Solomon code, for example a Reed-Solomon RS(5,3) code for a block of 5 x 5 sub-blocks, which is illustrated by the blank sub-blocks at the bottom left and at the top right of the 10x10 block in FIG. 4,
- at least one of the non-coding sub-blocks is a numbering sub-block making it possible to number the block (numbering from 0 to 15 in this case), which is useful in the case of non-linear organization, and which is illustrated by the sub-block (0, 0, 1, 1) at the bottom right of the block in FIG. 4.
The numbering sub-block can be replaced by a securing sub-block, or the like.
Preferably, the synchronization sub-block, the securing sub-blocks and the optional numbering sub-block are arranged at the 4 corners of the block, as illustrated in FIG. 4.
Preferably, provision is made, for a block, that:
- at least one line of the block comprises a set of coding binary elements (respectively coding sub-blocks) adjacent in pairs, said set being surrounded by a set of non-coding binary elements (respectively non-coding sub-blocks), and
- at least one column of the block comprises a set of coding binary elements (respectively coding sub-blocks) adjacent in pairs, said set being surrounded by a set of non-coding binary elements (respectively non-coding sub-blocks).
In particular, provision may be made for all of the non-coding sub-blocks of a given line to implement an error correcting code of the data coded by the set of coding sub-blocks of said line.
Similarly, it can be provided that all of the non-coding sub-blocks of a given column implement an error correcting code of the data coded by the set of coding sub-blocks of said column.
Thus, each row and each column of each block benefit from redundancy by an error correcting algorithm, for example a Reed-Solomon code.
In one embodiment, the coding sub-blocks are arranged in the center of the block and surrounded by non-coding sub-blocks.
Thanks to this feature, each block includes a corrector code in two perpendicular directions simultaneously, which makes it possible to limit the risk that scratches, most often linear, prevent the reading of part of the 2D barcode information.
In particular, it can be provided that each non-coding sub-block of a given line implements a Reed-Solomon RS(X, Y) code of the coding sub-blocks of said line, with:
- X the total number of sub-blocks (coding and non-coding) of said line, and
- Y the number of coding sub-blocks of said line.
In this case, with a 10x10 block comprising, per line, 5 sub-blocks of 2x2 distributed into 3 coding sub-blocks and 2 non-coding sub-blocks, we have X = M/2, that is to say, for M = 10, X = 5 and Y = X - 2 = 3. In this example, illustrated in FIG. 4, there is therefore at least one line comprising 3 coding sub-blocks framed by 2 non-coding sub-blocks which implement a Reed-Solomon RS(5,3) code.
Similarly, it can be provided that each non-coding sub-block of a given column implements a Reed-Solomon RS(X', Y') code of the coding sub-blocks of said column, with:
- X' the total number of sub-blocks (coding and non-coding) of said column, and
- Y' the number of coding sub-blocks of said column.
In this case, with a 10x10 block comprising, per column, 5 sub-blocks of 2x2 divided into 3 coding sub-blocks and 2 non-coding sub-blocks, we have X' = M/2, that is to say, for M = 10, X' = 5 and Y' = X' - 2 = 3. In this example, illustrated in FIG. 4, there is therefore at least one column comprising 3 coding sub-blocks framed by 2 non-coding sub-blocks which implement a Reed-Solomon RS(5,3) code.
In this case, each block therefore comprises 5 x 5 sub-blocks, distributed into 3 x 3 central coding sub-blocks and 16 peripheral non-coding sub-blocks. It is provided that the 3 x 3 central sub-blocks contain the message, represented by the set of values 1 in FIG. 4. The sub-blocks of the first line, the last line, the first column and the last column of the block constitute the peripheral non-coding sub-blocks; 4 of these peripheral sub-blocks constitute the corners of the block. The other sub-blocks constitute the central coding sub-blocks.
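A minimal sketch of this row-wise RS(5,3) protection follows, using the reedsolo package; since reedsolo works on byte symbols in GF(2^8) whereas the sub-blocks described here carry 4-bit values, carrying each 4-bit value in one byte symbol is a simplifying assumption:

from reedsolo import RSCodec

rs = RSCodec(2)  # n - k = 5 - 3 = 2 parity symbols, i.e. RS(5, 3)

def protect_row(coding_values):
    """coding_values: the 3 values (0..15) of the row's coding sub-blocks."""
    assert len(coding_values) == 3 and all(0 <= v < 16 for v in coding_values)
    return list(rs.encode(bytearray(coding_values)))  # 3 data + 2 parity = 5 symbols

def recover_row(received_values):
    """Corrects up to (n - k) / 2 = 1 corrupted sub-block per row."""
    decoded, _, _ = rs.decode(bytearray(received_values))  # reedsolo >= 1.5 API
    return list(decoded)

row = protect_row([9, 1, 14])
row[1] ^= 0b0110                    # simulate a linear scratch hitting one sub-block
assert recover_row(row) == [9, 1, 14]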
Once the 2D barcode has been encoded, it is joined to the source illustration, for example by printing on the same support as the latter.
In particular, for more discretion and depending on the type of use, provision can be made for the code to be printed with an invisible ink, typically comprising UV or IR pigments, which makes it invisible in visible light, so as not to hinder the reading/viewing of the source illustration, while remaining verifiable under UV or IR lighting.
Color encoding
The source illustration is in color. It can be directly attached to its destination support. It can also be transferred to its destination medium. In this case, for example, a source illustration may be an identity photograph which is scanned, possibly resized, and then printed on a destination medium such as an identity document.
On its destination medium, the source illustration can therefore be in color or in grayscale, depending in particular on the transfer method and the medium used (for example laser engraving in grayscale on polycarbonate, color printing on PVC , etc.).
Thus, by "inscribed illustration" is understood, without distinction:
- the source illustration in color per se, for example an identity photograph, in which case the destination medium is the printing paper,
- the source color illustration attached to its destination medium, for example an identity photograph on an identity document, in which case the destination medium is the identity document,
- a copy of the source illustration on its destination medium, in grayscale or in color, for example an identity photograph printed or engraved on an identity document, official or not, in which case the destination medium may be of any kind,
- a copy of the source illustration displayed on a display screen, for example that of a communicating object, in which case the destination medium is the display screen.
The source illustration can become an inscribed illustration (for example for an identity document) and then a reconstructed illustration (for example during a check).
Preferably, provision is made to fasten said inscribed illustration and said destination medium together, for example by printing, engraving, laser marking, gluing or any other means of attachment (stapling, etc.).
A step can be provided consisting in encoding at least certain color information of the source illustration in the 2D barcode. The terms "color" and "color information" are therefore used interchangeably below.
Thus, even if the inscribed illustration is in grayscale, the 2D barcode nevertheless encodes color information of the source illustration, which is particularly useful during decoding, as explained below.
In this case, provision is made to encode at least some color information from the source illustration in the message, in particular as metadata, as follows.
Provision is made to process the source illustration; the processing includes a step consisting in decomposing, or cutting out, the source illustration into a set of zones, and in associating color information with each zone, which makes it possible to compress the information.
Advantageously, provision is made to limit the number of zones, so as to limit the number of bytes of the message while retaining relevant information on the color of the source illustration. It is therefore not a question of coding the whole set of pixels of the source illustration, but of finding a compromise between significant information (for example brown hair, blue eyes, etc.) and a number of bytes that is relatively small, or as low as possible, to encode this color information.
In a first variant of a first embodiment, illustrated diagrammatically in FIG. 8, each zone is defined by a primitive, determined for example by a shape recognition algorithm, a biometric recognition algorithm or a color gradient.
On a source illustration such as a face, it is thus possible to recognize for example the eyes, the hair, the shape of the face, etc.
Preferably each zone has a simple geometric shape, in this case chosen from a set comprising:
- circles,
- ellipses,
- polygons, in particular rectangles.
The advantage of circles, ellipses and rectangles lies in the fact that they can be described, therefore saved as metadata, in the form of a limited number of shape descriptors, in this case respectively:
- the coordinates of the center and the value of the radius of the circle,
- the coordinates of the foci and the value of the constant of the sum of the distances to the foci, and
- the coordinates of two opposite vertices;
which is equivalent to recording the type of shape and the position of each shape.
Thus, the type of metadata advantageously makes it possible to identify a type of corresponding primitive (circle, ellipse, rectangle, etc.).
Each zone then corresponds to a unique color, typically the average value of the colors of the zone.
It is also possible to provide a combinatorial reduction step for the number of zones or of primitives, which makes it possible to reduce their number and therefore to obtain a message which is lighter to encode.
In a second variant of the first embodiment, each zone is defined by detecting the contours of the source illustration, for example by any one of a Prewitt filter, a Sobel filter and a Canny filter.
In this variant, provision is made to select only the closed contours, each closed contour corresponding to a zone.
We can then define (calculate) the position of the center of each zone.
Advantageously, only the position of the center of each zone can be recorded, and not the shape obtained by the detection of contours, which also makes it possible to obtain a message which is lighter to encode.
We can then associate a unique color to each defined area and record the unique correspondence between an area and a color for all of the defined areas.
Preferably, provision is made to encode the type of encoding in the metadata field, for example with one binary value corresponding to the first variant and the other binary value corresponding to the second variant.
Whatever the variant, we can provide a spectral analysis step to determine the average wavelength of each zone, in order to determine a single color per zone.
Each shape, that is to say the set of shape descriptors for the first variant and the coordinates of the center of each closed contour for the second variant, together with the color associated with each shape, can then be saved, for example as a character string, in the metadata field.
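Purely as an illustration of such a character string (the exact field layout below is hypothetical, the text only requiring the shape descriptors and one color per zone to be saved):

def serialize_zone(kind, params, rgb):
    """kind: "C" circle (cx, cy, r); "E" ellipse (fx1, fy1, fx2, fy2, d);
    "R" rectangle (x1, y1, x2, y2); rgb: the zone's unique color."""
    fields = ",".join(str(int(p)) for p in params)
    return f"{kind}:{fields}:{rgb[0]:02x}{rgb[1]:02x}{rgb[2]:02x}"

# Hypothetical zones of a portrait: one iris (circle) and the mouth area (rectangle).
metadata = "|".join([
    serialize_zone("C", (25, 18, 6), (70, 40, 20)),
    serialize_zone("R", (10, 40, 40, 60), (200, 60, 60)),
])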
In a third variant of the first embodiment, not illustrated, each zone is defined beforehand (C1) by applying a static geometric grid having a predetermined shape, in this case that of a face, to the source illustration, possibly resized to the dimensions of the grid.
It is then possible to establish (C2) a one-to-one correspondence between each zone of the source illustration and each zone of the digitized inscribed illustration, and to color (C3) each zone of the digitized inscribed illustration with the color of the corresponding source illustration zone extracted from the metadata.
Whatever the variant of the first embodiment, the processing also includes a step consisting in then associating with each zone a respective, preferably unique, color (FIG. 5), so as to create a set of flat color areas.
In a second embodiment, a pre-processing step can be provided, prior to the processing step described below, comprising at least one of the following steps:
- a step consisting in smoothing and reducing the noise of the source illustration, for example by applying a median filter to the whole of the digitized source illustration, which makes it possible to increase the number of homogeneous zones in the illustration;
- a step consisting in resizing the digitized source illustration, preferably preserving the same ratio between its height and its width. Preferably, the dimensions (in units of length or in number of pixels) to which the source illustration is resized are predetermined, for example 50x64 (width x height) pixels.
Processing the source illustration includes steps consisting in:
- digitizing the source illustration according to Red, Green, Blue (RGB) coding;
- converting the digitized source illustration into the Hue Saturation Value (or Hue Saturation Luminance) space, comprising a hue component, a saturation component and a value (or luminance) component, and saving in a memory, in this case temporary, at least the source illustration digitized in the hue space (FIG. 14A) and the source illustration digitized in the saturation space (FIG. 14B);
- segmenting the digitized source illustration into a set of zones, each zone corresponding to a so-called "homogeneous" set of pixels of the digitized source illustration whose hue, respectively saturation, is uniform.
The Hue Saturation Value space is also known as HSV. The hue corresponds to the color, the saturation corresponds to the purity of the color, and the value corresponds to the brightness (or luminance) of the color.
In this case only the hue component and the saturation component are recorded.
The segmentation step aims to segment, or partition, the n pixels (or points) of the digitized source illustration into k sets (or zones, or classes) such that k < n, by minimizing the distance between the points inside each zone, the distance being calculated in the hue space and in the saturation space, and not in the geometric space (X, Y positions of the pixels in the image).
Typically, the segmentation step is implemented by a machine-learning classification algorithm, in this case unsupervised; k-means partitioning is used here.
Each class is represented by the barycenter (centroid) of the class and includes the set of pixels closest to each other in the hue space and in the saturation space.
A number of classes is predefined, for example K = 5 classes, and the same classification algorithm is applied to the hue component and to the saturation component. A set of 5 classes in the hue space, denoted for example T1 to T5, and a set of 5 classes in the saturation space, denoted for example S1 to S5, are therefore obtained. Provision can also be made for the number of classes for the hue space to be different from the number of classes for the saturation space.
Each pixel P of the digitized source illustration can be classified, at the end of the segmentation step, in a unique hue class and in a unique saturation class. Each pixel P can therefore be associated with the class Ti (i ∈ [1; K]) in the hue space and with the class Si (i ∈ [1; K]) in the saturation space to which it belongs.
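This segmentation can be sketched with OpenCV's own k-means implementation (an unsupervised classifier, as required above); K = 5 and the 50x64 resize follow the running example, while the remaining parameters are assumptions. Note that OpenCV stores hue as H/2 (0..179) and saturation as 0..255 for 8-bit images:

import cv2
import numpy as np

def segment_channel(channel, k=5):
    """k-means over one 2-D plane (hue or saturation); returns labels and centroids."""
    samples = channel.reshape(-1, 1).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(samples, k, None, criteria,
                                    attempts=5, flags=cv2.KMEANS_RANDOM_CENTERS)
    return labels.reshape(channel.shape), centers.ravel()

img = cv2.resize(cv2.imread("source.png"), (50, 64))    # 50 pixels wide, 64 pixels high
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
hue_labels, hue_centroids = segment_channel(hsv[:, :, 0])   # classes T1..T5
sat_labels, sat_centroids = segment_channel(hsv[:, :, 1])   # classes S1..S5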
For example, we can define:
- PX,Y(T_PX,Y), the hue value of the pixel P with coordinates X and Y; and
- PX,Y(S_PX,Y), the saturation value of the pixel P with coordinates X and Y.
However, saving each value (T_PX,Y) and (S_PX,Y) for each pixel PX,Y would require too many resources.
First, provision is made to replace the value of each pixel with the value of the centroid of its class, in the hue space and similarly in the saturation space.
One can thus advantageously define:
- PX,Y(Ti), the hue value Ti of the barycenter of the class i (i ∈ [1; K]) in which the pixel P with coordinates X and Y has been classified, with 0 ≤ Ti ≤ 360; and
- PX,Y(Si), the saturation value Si of the barycenter of the class i (i ∈ [1; K]) in which the pixel P with coordinates X and Y has been classified, with 0 ≤ Si ≤ 100.
For example:
- P12,7(T1) means that the pixel with coordinates X = 12 and Y = 7 is classified in the hue class T1, and
- P12,7(S3) means that this same pixel with coordinates X = 12 and Y = 7 is classified in the saturation class S3.
Next, provision is made to compress the data for each pixel of the digitized source illustration.
To compress the data, provision is made to traverse the pixels along a predetermined direction of travel, in the hue space as in the saturation space.
In this case, provision is made to scan the digitized source illustration column by column, from left to right, in columns 1 pixel wide, and from top to bottom along each column.
Instead of coding the hue value Ti of each pixel PX,Y along each column, which would involve coding Y values, provision is made to code, in each column, a sequence of at least one pair of values NTi, with N the number of consecutive pixels classified in the hue class Ti. Similarly, provision is made to code, in each column, a sequence of at least one pair of values NSi, with N the number of consecutive pixels classified in the saturation class Si.
Preferably, a separator is provided, in this case a comma, to encode the passage from one column to the next, which avoids encoding the column number.
For example, FIG. 1 shows a digitized source illustration of 50 x 64 pixels, with 50 pixels per row and 64 pixels per column. In this case X ∈ [1; 50] and Y ∈ [1; 64].
In the saturation space:
In the first column, all the pixels are classified in the saturation class S1. The pixels of the first column are therefore coded 64S1.
In the second column, the first 30 pixels are classified in the saturation class S1, the next 20 pixels are classified in the saturation class S2, and the 14 remaining pixels are classified in the saturation class S1. The pixels of the second column are therefore coded 30S120S214S1. In this case the second column is therefore coded with at most 6 values only: 30, S1, 20, S2, 14 and S1.
And so on for the following columns.
Similarly, in the hue space:
In the first column, all the pixels are classified in the hue class T1. The pixels of the first column are therefore coded 64T1.
In the second column, the first 20 pixels are classified in the hue class T1, the next 10 pixels are classified in the hue class T2, the next 10 pixels are classified in the hue class T1, and the 24 remaining pixels are classified in the hue class T3. The pixels of the second column are therefore coded 20T110T210T124T3. In this case the second column is therefore coded with at most 8 values only: 20, T1, 10, T2, 10, T1, 24 and T3.
And so on for the following columns.
As the Ti and Si values can be redundant by column, it is advantageous to record the set of Ti and Si values separately for i = 1 ... K, in this case in the form of a dictionary.
In the example above, the first two columns are therefore encoded as follows:
- 64S1,30S120S214S1 in the saturation space, and
- 64T1,20T110T210T124T3 in the hue space;
the values T1 to T5 and S1 to S5 being encoded separately.
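A sketch of this column-wise run-length encoding, producing exactly the kind of streams shown above (the 0-based labels from the segmentation sketch earlier are shifted to the 1-based class names T1..TK / S1..SK):

def rle_columns(labels, prefix):
    """labels: 2-D array of 0-based class indices; prefix: "T" or "S"."""
    columns = []
    for col in labels.T:                      # left to right, top to bottom
        runs, count = [], 1
        for prev, cur in zip(col, col[1:]):
            if cur == prev:
                count += 1
            else:
                runs.append(f"{count}{prefix}{prev + 1}")
                count = 1
        runs.append(f"{count}{prefix}{col[-1] + 1}")
        columns.append("".join(runs))
    return ",".join(columns)                  # the comma encodes the column change

sat_stream = rle_columns(sat_labels, "S")     # e.g. "64S1,30S120S214S1,..."
hue_stream = rle_columns(hue_labels, "T")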
The set of pairs of values NTi and NSi, and the set of values Ti and Si, are encoded at predefined positions in the blocks of the 2D barcode described above.
Decoding
Provision is made to take an optical capture of the inscribed illustration, for example printed in grayscale, and of the 2D barcode, using an optical lens, preferably simultaneously. The inscribed illustration is thus digitized and saved in a memory.
For example, the optical lens is that of a communicating object, the communicating object also comprising a memory. Alternatively, the optical lens can be a camera or a webcam, connected to a computer and a memory.
A computer program for the decoding described below is stored in the memory.
A step is provided which consists in finding the position of the landmarks, for example by using gradient detectors.
Once the landmarks have been identified, a step is provided consisting in straightening the image contained between the landmarks, for example by means of the warpAffine method of the OpenCV library.
The straightening consists in reconstructing, despite a sometimes non-orthogonal viewing angle with respect to the plane of the inscribed illustration, all the components of the 2D barcode as they would have appeared on a flat initial surface.
Next, a step is provided consisting in matching the rectified image with a predetermined grid, which makes it possible to read the pixels of the 2D barcode and convert them into a string of binary symbols.
The message can then be decoded by passing these symbols through the inverse of the algorithm used for encoding.
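As a non-authoritative sketch of these two steps, assuming three detected landmarks and a dark-module-is-one convention (neither is imposed by the text), the straightening and grid reading could look as follows:

import cv2
import numpy as np

def straighten(capture, detected_pts, reference_pts, out_size):
    """detected_pts / reference_pts: three matching (x, y) landmark positions."""
    M = cv2.getAffineTransform(np.float32(detected_pts), np.float32(reference_pts))
    return cv2.warpAffine(capture, M, out_size)   # out_size = (width, height)

def read_bits(rectified, grid_rows, grid_cols):
    """Average each cell of the predetermined grid and threshold it into one bit."""
    gray = cv2.cvtColor(rectified, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    bits = []
    for r in range(grid_rows):
        for c in range(grid_cols):
            cell = gray[r * h // grid_rows:(r + 1) * h // grid_rows,
                        c * w // grid_cols:(c + 1) * w // grid_cols]
            bits.append(1 if cell.mean() < 128 else 0)   # dark module read as 1
    return bits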
The signature can then be verified to ensure that it is genuine content emitted by the signatory authority. If this is not the case, the message can be rejected as not authentic.
The characteristics of the digitized inscribed illustration, the data (attributes) and the metadata are extracted from the 2D barcode, preferably only if the authenticity is verified. These attributes extracted from the 2D barcode are then said to be "read".
In parallel (or in series) with the previous operations, the inscribed illustration read by the optical sensor is processed to extract the same remarkable points and the same attributes as those selected from the source illustration for the generation of the 2D barcode. These attributes, extracted from the digital image of the inscribed illustration, are then said to be "recalculated". In particular, all or part of the remarkable points initially recorded can be considered, depending on the desired level of confidence.
A step is then provided which consists in comparing the value of the difference between the attributes read and the attributes recalculated with a predetermined threshold value recorded in a memory.
Typically, the difference between the attributes read and the attributes recalculated is computed using a set of metrics (typically distances or angle ratios), for example Euclidean distances.
For example, as illustrated in FIG. 3, it is expected that:
- a REy metric corresponds to the distance along the Oy axis, between the right eye and a predetermined fixed reference,
- a metric Ny corresponds to the distance along the axis Oy, between the nose and a predetermined fixed reference,
- a metric My corresponds to the distance along the axis Oy, between the mouth and a predetermined fixed reference,
- an Angle2 metric corresponds to the angle between the pupils and the tip of the nose,
- etc.
By way of nonlimiting example, the predetermined fixed reference mark is for example an edge of the photo, a remarkable point of the face, in this case on the forehead, etc.
If the distance between the attributes read and the attributes recalculated is below the threshold value, we consider that the recorded illustration digitized by the optical sensor is indeed in conformity with the original source illustration, otherwise we consider the recorded illustration as not authentic.
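A minimal sketch of this comparison, assuming the attributes read and recalculated are collected into numeric vectors and compared with a Euclidean metric (the attribute values and the threshold are illustrative):

```python
import numpy as np

def is_authentic(read_attributes, recalculated_attributes, threshold):
    """Compare the attributes read from the 2D barcode with the attributes
    recalculated from the digitized illustration, with a Euclidean metric."""
    read = np.asarray(read_attributes, dtype=float)
    recalc = np.asarray(recalculated_attributes, dtype=float)
    return np.linalg.norm(read - recalc) < threshold

# e.g. an attribute vector (REy, Ny, My, Angle2); values are illustrative
print(is_authentic([12.0, 30.5, 41.2, 0.61],
                   [12.3, 30.1, 41.0, 0.63],
                   threshold=1.5))   # True: below the threshold
```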
Advantageously, this comparison can be implemented offline. It is therefore possible to verify the authenticity of a source illustration anywhere, using a communicating object and without a network connection.
In the case where the digital fingerprint also comprises data extrinsic to the source illustration, said extrinsic data (for example the card number, the name, the first name, etc.) decoded from the 2D barcode can then also be displayed to a user on a display screen, allowing him to verify for himself that it is indeed the information present on the document.
Metadata can also be used to check characteristics specific to the document holder, using additional technical means (for example a fingerprint reader, an iris scanner, etc.). The source illustration (the photograph) thus authenticated can allow a biometric verification of the holder.
Color decoding
When the 2D barcode includes color coding, the source illustration can be written in grayscale or in color on a medium. For example, the inscribed illustration is an identity photograph attached to an identity document, the photograph representing the bearer of the identity document.
The digitization of the 2D barcode and of the registered illustration makes it possible, by the decoding described below, to reconstruct (equivalently, to decode or recalculate) the source illustration and to display the reconstructed illustration, in particular on the display screen of the communicating object. This can be useful in particular in the field of security, for example for a check, typically by comparing the reconstructed illustration with the registered illustration, with the document holder, or even with the original source illustration, for example for an online check against a digitized source image distributed online.
If the registered illustration is in color, then it is also possible to compare the colors between the registered illustration and the reconstructed illustration, which allows additional control.
It is planned to extract the metadata from said 2D barcode, for example in a temporary memory.
Preferably, a step A1 is provided to identify the type of encoding (in this case, by primitives or by closed contours).
For the first variant of the first embodiment of color encoding, corresponding to the type of encoding by primitives, provision is made to identify each primitive, in this case using the format of said metadata.
We can then, in a step A2, apply each primitive to the digitized inscribed illustration.
We can then recalculate each area and colorize each area with its respective associated color from the primitives and colors encoded in said metadata (Figure 6).
It is then planned to display on the display screen of the communicating object the image resulting from the fusion, in a step A3, of the recalculated colored zones and the digitized inscribed illustration (FIG. 7), the fusion being known per se; this is particularly useful when the inscribed illustration is in grayscale.
For the second variant of the first embodiment of color encoding, corresponding to the type of encoding by closed contours, provision is first made, in a step B1, to define a set of zones of the digitized registered illustration by the same edge-detection algorithm as that implemented for the color encoding.
Thus, the edge detection is implemented a first time (on the source illustration, for encoding the 2D barcode) and a second time (on the inscribed illustration, for decoding the 2D barcode).
Therefore, in a step B2, a one-to-one correspondence can be established between each zone of the source illustration used for encoding the 2D barcode and each zone of the registered illustration digitized for decoding the 2D barcode. In this case, the position of the center of each zone of the source illustration used for encoding the 2D barcode is the same as the position of the center of the corresponding zone of the registered illustration digitized for the decoding of the 2D barcode.
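A possible implementation of the B2 correspondence, assuming zone centers are available as (x, y) coordinates and matching each source zone to the nearest scanned zone (a nearest-center rule consistent with, but not mandated by, the text):

```python
import numpy as np

def match_zones_by_center(source_centers, scanned_centers):
    """Associate each source-illustration zone with the scanned-illustration
    zone whose center is nearest (indices into the two center lists)."""
    scanned = np.asarray(scanned_centers, dtype=float)
    matches = {}
    for i, center in enumerate(source_centers):
        d = np.linalg.norm(scanned - np.asarray(center, dtype=float), axis=1)
        matches[i] = int(np.argmin(d))
    return matches

# Two zones whose centers coincide up to digitization noise
print(match_zones_by_center([(10, 12), (40, 45)],
                            [(41, 44), (9, 13)]))  # {0: 1, 1: 0}
```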
A falsification check can be established, based for example on the comparison of the number of zones or the position of the zones of the source illustration and the zones of the registered illustration.
A reconstructed illustration of the source illustration can be calculated with each zone and each respective associated color (FIG. 11) by associating the color information of each zone of the source illustration with each zone of the digitized inscribed illustration, and, for example in a step B3, by coloring each zone of the digitized registered illustration with the color of the corresponding source-illustration zone extracted from the metadata.
Typically, the color of each area of the digitized inscribed illustration is determined by the color of each corresponding area of the source illustration, the match being determined by the identical position of the center of each area.
The display may also include a preliminary filling step consisting in applying an inpainting algorithm to the reconstructed illustration, that is to say a technique for reconstructing deteriorated images or filling in missing parts of an image, which ensures continuity between two artificial objects (in this case, to avoid white spaces between two colored areas); or any other equivalent function capable of smoothing, or even erasing, pixelation and/or zoning effects.
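A minimal sketch of such a filling step using OpenCV's inpainting, assuming unfilled pixels were left pure white (that marking convention is an assumption):

```python
import cv2

def fill_white_gaps(reconstructed_bgr):
    """Apply OpenCV inpainting to fill the white spaces left between two
    colored zones (pure white taken as the 'missing' marker, an assumption)."""
    mask = cv2.inRange(reconstructed_bgr, (255, 255, 255), (255, 255, 255))
    return cv2.inpaint(reconstructed_bgr, mask, 3, cv2.INPAINT_TELEA)
```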
For the second embodiment of color encoding, a step D1 is provided to extract from the dictionary the values of the set of Ti and Si.
We also plan to calculate a reconstructed illustration of the source illustration in the hue space and in the saturation space by decompressing, in a step D2, the data compressed in the 2D barcode, which amounts to recomposing the set of (k) zones of the hue space and of the saturation space. For this purpose, we extract the color information from said 2D barcode and reconstruct it in the hue space and in the saturation space. In this case, we extract the set of sequences of at least one pair of values (N, Ti) for each column; we can then reconstruct on this basis the source illustration in the hue space. Similarly, we extract the set of sequences of at least one pair of values (N, Si) for each column; we can then reconstruct on this basis the source illustration in the saturation space.
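The decompression of step D2 can be sketched as the inverse of the column run-length encoding shown earlier; the dictionary values are illustrative:

```python
def rle_decode_column(runs, class_values):
    """Inverse of the column encoding: expand (run_length, class_index)
    pairs into per-pixel values using the dictionary of step D1."""
    pixels = []
    for length, class_index in runs:
        pixels.extend([class_values[class_index]] * length)
    return pixels

# Hypothetical dictionary of hue class values T1..T3
hue_values = {1: 0.08, 2: 0.33, 3: 0.58}
column = rle_decode_column([(20, 1), (10, 2), (10, 1), (24, 3)], hue_values)
print(len(column))  # 64 pixels reconstructed
```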
As the registered illustration has been digitized, provision may be made for a step D3 consisting in resizing the reconstructed illustration in the saturation space and in the hue space to the dimensions of the digitized registered illustration.
A step D4 is also planned to reconstruct the brightness (value) component from the digitized inscribed illustration, or possibly from the digitized source illustration, in a manner known per se: if the inscribed illustration is in grayscale, the brightness component is extracted from it directly, and if the inscribed illustration is in color, it is first transformed into grayscale by averaging the RGB values of the pixels in order to extract the brightness component.
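A minimal sketch of the D4 extraction, assuming an 8-bit image array in which a two-dimensional array is grayscale and a three-dimensional array is color:

```python
import numpy as np

def brightness_component(image):
    """Step D4 sketch: use a grayscale image directly, otherwise average
    the RGB channels of a color image to obtain the brightness plane."""
    if image.ndim == 2:                          # already grayscale
        return image
    return image.mean(axis=2).astype(np.uint8)   # average of R, G, B
```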
We can provide a step D5 to calculate the hue, saturation and brightness components of the reconstructed illustration by combining the set of (k) zones of the hue space and of the saturation space recomposed in step D2, and the brightness component reconstructed in step D4.
We thus have the three HSV components with the same dimensions, corresponding to the digitized source illustration. We can then, in a step D6, convert the three HSV components into RGB values and display the result of this conversion on the display screen, i.e. display the reconstructed illustration with the colors reconstructed from the 2D barcode.
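Steps D5 and D6 can be sketched with OpenCV, assuming the three planes are 8-bit arrays of identical size and the hue plane uses OpenCV's 0-179 range (these representational choices are assumptions):

```python
import cv2

def reconstruct_color(hue, saturation, brightness):
    """Steps D5 and D6 sketch: merge the hue and saturation planes decoded
    from the 2D barcode with the brightness plane of step D4, then convert
    the HSV image to BGR for display on the screen."""
    hsv = cv2.merge([hue, saturation, brightness])
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
```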
Thus, even if the registered illustration is in grayscale on its destination medium (for example an identity photograph laser-engraved on a polycarbonate substrate), the decoding of the 2D barcode makes it possible to calculate a reconstructed illustration of the color source illustration from which the registered illustration originates.
It is thus possible to easily check certain features, for example the color of the eyes, the skin, the hair, etc., between the displayed illustration and the identity document holder. It is also possible to detect whether the registered illustration has been altered: in that case, the 2D barcode does not encode exactly the same zones or the same colors, so there is a difference between the registered illustration and the displayed illustration, which then presents particularly visible artifacts.
For example, FIG. 12A and FIG. 12B respectively illustrate the decoding of the 2D barcode for which the framed source illustration is original. FIG. 13A and FIG. 13B respectively illustrate the decoding of the 2D barcode for which the source illustrations have been swapped: the 2D barcode of FIG. 13B corresponds to the source illustration of FIG. 12A and the 2D barcode of FIG. 13A corresponds to the source illustration of FIG. 12B. In this case, the display of the reconstructed illustrations in the lower part of FIGS. 13A and 13B reveals artifacts: for example in FIG. 13B, the shape of the hair of FIG. 12B can be seen in the bottom left corner. This makes it very easy and quick to conclude that the displayed illustration and the 2D barcode do not match, and that one or the other has probably been altered.
It is interesting to note that this process, which can be completely dematerialized, can reveal traces of falsification comparable to those that inspectors, for example border control officers, are used to detecting when checking physical documents. Hence the interest of encoding color even for purely digital document checks (which are not subject to the constraints and technical limitations of printing).
Advantageously, the extraction of the message is implemented automatically.
The present invention makes it possible to authenticate the same source illustration at two separate times, despite the inevitable damage in the life cycle thereof or of a document supporting it, whatever the nature of the supports of the documents (physical or digital) to be checked.
The present invention can also make it possible to authenticate that the copy of a source illustration conforms to the original.
The present invention is not limited to the embodiments described above. For example, it can be implemented in the field of authentication of registered trademarks, for example to authenticate that the mark affixed on a product is indeed the original mark; for the authentication of labels, in particular comprising a manufacturing hazard allowing them to be characterized, in particular in the field of security or of wines and spirits.
In this sense, an illustration within the meaning of the present invention can be a graphic signature within the meaning of patent EP2526531 filed by the applicant.
Claims (11)
[1" id="c-fr-0001]
1. Method for encoding a source color illustration, comprising steps consisting in:
- write said source illustration in the form of an illustration in grayscale or in color,
- encoding a message in the form of a two-dimensional barcode comprising a set of blocks, each block coding a fragment of said message and comprising a set of rows and columns, and each block comprising a set of coding sub-blocks, each sub-block comprising a set of binary elements;
characterized in that the encoding step comprises preliminary steps consisting in:
- define or identify a set of remarkable points on the source illustration,
- calculate a set of attributes as a function of at least some of the remarkable points of said set,
- select, from among all of the calculated attributes, a set of at least one attribute making it possible to define a digital fingerprint,
- optionally compress said digital fingerprint,
- optionally sign said digital fingerprint using a cryptographic signature,
- record in the message one of:
a set of at least one attribute,
- the digital fingerprint,
- the compressed digital fingerprint,
- the signed uncompressed digital fingerprint, and
- the digital fingerprint compressed and signed.
[2" id="c-fr-0002]
2. Method according to claim 1, further comprising a step consisting in:
- encode at least some color information from the source illustration in said two-dimensional barcode.
[3" id="c-fr-0003]
3. Method according to any one of the preceding claims, further comprising at least one of the steps consisting in:
- write said two-dimensional barcode on a destination medium;
- make said registered illustration integral with said destination medium;
- make the two-dimensional barcode integral with the registered illustration; and
- arrange the two-dimensional barcode in a predefined manner in relation to the registered illustration, optionally by framing said registered illustration.
[4" id="c-fr-0004]
4. Method according to any one of the preceding claims, in which each coding block further comprises a set of non-coding sub-blocks whose position is predefined, the method further comprising, for each coding block, steps consisting in:
- select a predefined set of coding sub-blocks,
- encoding on a set of at least one predefined non-coding sub-block the result of the application of an error correcting code to the values encoded by said predefined set of coding sub-blocks.
[5" id="c-fr-0005]
5. Method according to any one of the preceding claims, comprising:
- digitize the source illustration if it is not already in digital form; and
either:
- decompose the digitized source illustration into a set of zones, each zone being defined:
- by a primitive, determined by a shape recognition algorithm, a biometric recognition algorithm or a color gradient; or
- by a contour detection algorithm; or
- by application to the source illustration of a static geometric grid having a predetermined shape, optionally that of a face, the grid comprising a set of zones; and
- associate each zone with a respective color or color information;
or:
- convert the digitized source illustration, possibly resized, into the Hue Saturation Value space comprising a hue component and a saturation component; and
- compress the color information of the source illustration in the hue space and in the saturation space by:
- decomposing said hue component and said saturation component into a set of respective zones, each zone corresponding to a homogeneous set of pixels whose hue, respectively saturation, is uniform; and
- associate a respective hue and saturation with each zone and save them in a dictionary.
[6" id="c-fr-0006]
6. Method according to any one of the preceding claims, comprising at least one of the steps consisting in:
- add data extrinsic to the source illustration to the digital fingerprint,
- select a number of attributes greater than a threshold value recorded in a memory from the set of calculated attributes; and
- encode at least certain color information of the source illustration in the form of metadata of said barcode.
[7" id="c-fr-0007]
7. Method according to any one of claims 4 to 6, comprising at least one of the steps consisting in, for each block:
- arrange the coding sub-blocks and the non-coding sub-blocks so that:
o at least one line of the block comprises a set of coding sub-blocks that are adjacent two by two, said set being surrounded by a set of non-coding sub-blocks, and o at least one column of the block comprises a set of coding sub-blocks that are adjacent two by two, said set being surrounded by a set of non-coding sub-blocks; and
- arrange the coding sub-blocks and the non-coding sub-blocks so that:
o for at least one column comprising a set of coding sub-blocks, each non-coding sub-block of said column encodes a respective result of the application of a respective error correcting code to the values encoded by said set of coding sub-blocks of said column, o for at least one line comprising a set of coding sub-blocks, each non-coding sub-block of said line encodes a respective result of the application of a respective error correcting code to the values encoded by said set of coding sub-blocks of said line.
[8" id="c-fr-0008]
8. Method for authenticating an illustration inscribed in grayscale or in color, derived from a source illustration in color, comprising steps consisting in:
- digitize by an optical sensor:
o an illustration in grayscale or in color, and o a two-dimensional barcode encoded according to an encoding method according to any one of the preceding claims, and encoding a message and metadata;
- decode the message of said two-dimensional barcode, and
- compare the decoded message and the registered illustration.
[9" id="c-fr-0009]
9. The authentication method according to claim 8, comprising steps consisting in:
- in the inscribed illustration read by the optical sensor:
o define or identify a set of remarkable points on the inscribed illustration read by the optical sensor, o recalculate a set of attributes according to at least some of the remarkable points of said set, o select from the set of recalculated attributes, a set of at least one attribute making it possible to define a digital fingerprint,
- compare the value of the difference between the attributes decoded from said two-dimensional barcode and the recalculated attributes with a predetermined threshold value recorded in a memory, and
- optionally verify the cryptographic signature.
[10" id="c-fr-0010]
10. Authentication method according to any one of claims 8 or 9, further comprising steps consisting in:
- calculate a reconstructed illustration of the source illustration by extracting the message or the metadata from said two-dimensional barcode and, from said extracted message or said extracted metadata, by following one of the following sequences of steps A, B, C or D, consisting in:
A1. identify a set of primitives among said metadata,
A2. apply each primitive to the digitized inscribed illustration to recalculate each area on the digitized inscribed illustration and colorize each area with its respective associated color; and A3. merge said recalculated colored zones and the digitized registered illustration; or
B1. define a set of zones of the digitized registered illustration by an edge detection algorithm;
B2. establish a unique correspondence between each zone of the source illustration and each zone of the digitized registered illustration;
B3. color each area of the digitized inscribed illustration with the color of the corresponding source illustration area extracted from the metadata; or
C1. define a set of areas of the digitized inscribed illustration by applying a static geometric grid having a predetermined shape, optionally that of a face, to the digitized inscribed illustration;
C2. establish a unique correspondence between each zone of the source illustration and each zone of the digitized registered illustration;
C3. color each area of the digitized inscribed illustration with the color of the corresponding source illustration area extracted from the metadata; or
D1. extract from a dictionary the values of all the classes in the hue space and in the saturation space,
D2. recompose the set of zones of the hue space and of the saturation space by decompressing the two-dimensional barcode,
D3. optionally resize the reconstructed illustration in the saturation space and in the hue space to the dimensions of the digitized registered illustration,
D4. reconstruct the brightness (value) component from the digitized inscribed illustration, or from the digitized source illustration,
D5. calculate the hue, saturation and brightness components of the reconstructed illustration by combining all the zones of the hue space and of the saturation space recomposed in step D2, and the brightness component reconstructed in step D4,
D6. convert the hue, saturation and brightness components into red-green-blue values, and
- display the reconstructed illustration on a display screen; the method optionally further comprising a filling step consisting in applying a restoration algorithm to the reconstructed illustration prior to the display step.
[11" id="c-fr-0011]
11. A computer program comprising program code instructions for executing the steps of the method according to any one of the preceding claims, when said program is executed on a computer.
Similar technologies:
Publication number | Publication date | Patent title
EP3659070B1|2021-06-23|Method for authenticating an illustration
EP2294558B1|2018-10-31|Method and device for identifying a document printing plate
CN110998598A|2020-04-10|Detection of manipulated images
CA3024562A1|2017-11-23|Method of augmented authentification of a material subject
CA2957774A1|2017-08-11|Process for securing and verifying a document
FR3043230A1|2017-05-05|METHOD AND DEVICE FOR SECURING A DOCUMENT, METHOD AND DEVICE FOR CONTROLLING THEIR CORRESPONDENT AND SECURE DOCUMENT
LU93381B1|2017-06-19|Systems, methods and devices for tamper proofing documents and embedding data in a biometric identifier
FR2944901A1|2010-10-29|DEVICE FOR IDENTIFYING | A PERSON BY HIS IMPRESSION.
EP3210166B1|2020-04-29|Method of comparing digital images
WO2018072102A1|2018-04-26|Method and apparatus for removing spectacles in human face image
EP3173981A1|2017-05-31|Device for preparing a code to be read optically
US11042792B2|2021-06-22|Methods for encoding a source color illustration, for authenticating an illustration, and related computer media
FR3053500A1|2018-01-05|METHOD FOR DETECTING FRAUD OF AN IRIS RECOGNITION SYSTEM
KR101905416B1|2018-10-08|System and method for management of electronic fingerprint for preventing forgery or alteration of art pieces, method for detecting forgery or alteration of art pieces and computer program for the same
FR3091610A1|2020-07-10|Method for processing digital images
BR112020001386A2|2020-08-11|COLOR SOURCE ILLUSTRATION CODING PROCESS, AUTHENTICATION PROCESS FOR ILLUSTRATION INSCRIBED IN GRAY OR COLOR LEVELS AND COMPUTER PROGRAM
WO2016046502A1|2016-03-31|Generation of a personalised animated film
EP3832535A1|2021-06-09|Method for detecting at least one visible element of interest in an input image by means of a convolutional neural network
Zhang et al.2016|Multifeature palmprint authentication
FR3017333B1|2019-06-21|METHOD AND DEVICE FOR SECURING AN OBJECT, METHOD AND DEVICE FOR CONTROLLING THEIR CORRESPONDENT AND SECURE OBJECT
FR3057375A1|2018-04-13|METHOD FOR READING A SERIES OF BIDIMENTIONAL BAR CODES ON A MEDIUM, COMPUTER PROGRAM PRODUCT, AND READING DEVICE THEREFOR
FR2957705A1|2011-09-23|Microcircuit card e.g. bank card, securing method, involves capturing image from part of surface of microcircuit card, processing image to obtain information representing image, and storing information in microcircuit carried by card
FR3054057A1|2018-01-19|AUTHENTICATION METHOD INCREASED FROM A MATERIAL SUBJECT
Mahalakshmi0|A GSA-BASED METHOD IN HUMAN IDENTIFICATION USING FINGER VEIN PATTERNS
Patent family:
Publication number | Publication date
BR112019009299A2|2019-07-30|
WO2018087465A1|2018-05-17|
US20190354822A1|2019-11-21|
US11055589B2|2021-07-06|
CN110073368A|2019-07-30|
MA45806A1|2019-06-28|
CA3042970A1|2018-05-17|
FR3058541A1|2018-05-11|
FR3058541B1|2018-11-23|
EP3659070A1|2020-06-03|
CA3071048A1|2019-01-31|
MA45806B1|2020-04-30|
EP3539058A1|2019-09-18|
EP3659070B1|2021-06-23|
FR3058543B1|2019-06-28|
CA3042970C|2021-03-30|
WO2019020893A1|2019-01-31|
FR3058542A1|2018-05-11|
ZA201903604B|2021-04-28|
Cited references:
Publication number | Filing date | Publication date | Applicant | Patent title
DE2759270A1|1977-12-30|1979-07-12|Lie Loek Kit|Protection method for identity cards and passports - has photograph edge which is covered by frame on which pattern and passport number are printed|
EP2581860A1|2011-10-10|2013-04-17|Zortag, Inc.|Method, system, and label for authenticating objects|
EP2743893A1|2012-12-12|2014-06-18|Gemalto SA|Method for securing a document including printed information and corresponding document|
EP2937819A1|2012-12-19|2015-10-28|Denso Wave Incorporated|Information code, method for generating information code, device for reading information code, and system for using information code|
US20140267369A1|2013-03-15|2014-09-18|Pictech Management Limited|Image encoding and decoding using color space|
FR2955288B1|2010-01-18|2013-05-10|Hologram Ind|METHOD FOR SECURING AN OBJECT, AND CORRESPONDING OBJECT|
CN103718195B|2012-02-21|2016-06-29|艾克尼特有限公司|readable matrix code|
US9892355B2|2015-05-20|2018-02-13|The Code Corporation|Barcode-reading system|
JP6486016B2|2014-05-16|2019-03-20|株式会社デンソーウェーブ|Information code generation method, information code, and information code utilization system|
TWI507909B|2014-09-23|2015-11-11|Univ Nat Taipei Technology|Data-bearing encoding system and data-bearing decoding system|
US9892478B2|2015-03-06|2018-02-13|Digimarc Corporation|Digital watermarking applications|
US10521635B2|2015-04-28|2019-12-31|The Code Corporation|Architecture for faster decoding in a barcode reading system that includes a slow interface between the camera and decoder|US11042792B2|2017-07-24|2021-06-22|Surys|Methods for encoding a source color illustration, for authenticating an illustration, and related computer media|
FR3091940A1|2019-01-21|2020-07-24|Surys|Image processing process for identity document|
DE102019129490A1|2019-10-31|2021-05-06|Bundesdruckerei Gmbh|Method and system for the production and verification of a security document|
US20210314316A1|2020-04-06|2021-10-07|Alitheon, Inc.|Local encoding of intrinsic authentication data|
CN111931891B|2020-10-13|2021-08-24|北京博大格林高科技有限公司|Method for constructing anti-counterfeiting graphic code by using novel orthogonal code, anti-counterfeiting graphic code and generation device|
Legal status:
2018-05-11| PLSC| Publication of the preliminary search report|Effective date: 20180511 |
2018-06-20| PLFP| Fee payment|Year of fee payment: 2 |
2019-06-21| PLFP| Fee payment|Year of fee payment: 3 |
2020-06-23| PLFP| Fee payment|Year of fee payment: 4 |
2021-06-23| PLFP| Fee payment|Year of fee payment: 5 |
Priority:
Application number | Filing date | Patent title
FR1660874A|FR3058541B1|2016-11-09|2016-11-09|METHOD FOR AUTHENTICATING AN ILLUSTRATION|
FR1660874|2016-11-09|
FR1753902A|FR3058542A1|2016-11-09|2017-05-03|METHOD FOR AUTHENTICATING AN ILLUSTRATION|US16/633,733| US11042792B2|2017-07-24|2018-07-12|Methods for encoding a source color illustration, for authenticating an illustration, and related computer media|
CA3071048A| CA3071048A1|2016-11-09|2018-07-12|Method for authenticating an illustration|
PCT/FR2018/051761| WO2019020893A1|2016-11-09|2018-07-12|Method for authenticating an illustration|
EP18749465.3A| EP3659070B1|2016-11-09|2018-07-12|Method for authenticating an illustration|